Fast Factorization Method
Authors
Abstract
Similar Resources
Fast Nonnegative Tensor Factorization with an Active-Set-Like Method
We introduce an efficient algorithm for computing a low-rank nonnegative CANDECOMP/PARAFAC (NNCP) decomposition. In text mining, signal processing, and computer vision, among other areas, imposing nonnegativity constraints on low-rank factors has been shown to be an effective technique that provides physically meaningful interpretations. A principled methodology for computing NNCP is alternating nonnegativ...
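The alternating nonnegative optimization idea mentioned above can be illustrated in simplified matrix form. The sketch below is plain nonnegative matrix factorization with classic multiplicative updates, not the paper's active-set-like tensor method; the function name and parameters are illustrative.

```python
import numpy as np

def nmf_multiplicative(V, rank, iters=200, eps=1e-9, seed=0):
    """Simplified NMF: find nonnegative W, H with V ~= W @ H.

    Uses Lee-Seung multiplicative updates to illustrate alternating
    nonnegative optimization; the paper's active-set-like NNCP
    algorithm for tensors is considerably more elaborate.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps   # positive init keeps updates nonnegative
    H = rng.random((rank, n)) + eps
    for _ in range(iters):
        # Each update multiplies by a nonnegative ratio, so
        # nonnegativity is preserved automatically.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.abs(np.random.default_rng(1).random((6, 5)))
W, H = nmf_multiplicative(V, rank=2)
```

Multiplicative updates are the simplest route to nonnegativity; active-set methods, as in the paper above, instead solve each alternating subproblem as a constrained least-squares problem.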
Asymptotically Fast Factorization of Integers
The paper describes a "probabilistic algorithm" for finding a factor of any large composite integer n (the required input is the integer n together with an auxiliary sequence of random numbers). It is proved that the expected number of operations required is O(exp{β(ln n ln ln n)^{1/2}}) for some constant β > 0. Asymptotically, this algorithm is much faster than any previously anal...
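To make "probabilistic factoring" concrete, here is a sketch of Pollard's rho method, a simpler randomized factor-finding routine than the random-squares algorithm the abstract analyzes; it is shown only as an illustration of the general idea.

```python
import math
import random

def pollard_rho(n, seed=1):
    """Return a nontrivial factor of a composite integer n.

    Pollard's rho: iterate x -> x^2 + c mod n with Floyd cycle
    detection and hope gcd(|x - y|, n) reveals a factor. This is a
    simpler probabilistic method than the paper's, shown for flavor.
    """
    if n % 2 == 0:
        return 2
    rng = random.Random(seed)
    while True:
        c = rng.randrange(1, n)          # random polynomial constant
        f = lambda x: (x * x + c) % n
        x = y = rng.randrange(2, n)
        d = 1
        while d == 1:
            x = f(x)                     # tortoise: one step
            y = f(f(y))                  # hare: two steps
            d = math.gcd(abs(x - y), n)
        if d != n:                       # d == n means retry with new c
            return d

print(pollard_rho(8051))  # 8051 = 83 * 97, so this prints one of those factors
```

Pollard's rho runs in roughly O(n^{1/4}) expected steps, so it is far slower asymptotically than the subexponential bound quoted above, but it shows the same input shape: a composite n plus a stream of random choices.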
Fast QR factorization of Cauchy-like matrices
In this paper we present two fast numerical methods for computing the QR factorization of a Cauchy-like matrix C with data points lying on the real axis or on the unit circle in the complex plane. It is shown that the rows of the Q-factor of C give the eigenvectors of a rank-structured matrix partially determined by some prescribed spectral data. This property establishes a basic connection betw...
Fast Packet Classification Using Condition Factorization
Rule-based packet classification plays a central role in network intrusion detection systems such as Snort. To enhance performance, these rules are typically compiled into a matching automaton that can quickly identify the subset of rules that are applicable to a given network packet. The principal metrics in the design of such an automaton are its size and the time taken to match packets at ru...
CuMF_SGD: Fast and Scalable Matrix Factorization
Matrix factorization (MF) has been widely used in, e.g., recommender systems, topic modeling, and word embedding. Stochastic gradient descent (SGD) is popular for solving MF problems because it can handle large data sets and supports incremental learning. We observed that SGD for MF is memory-bound. Meanwhile, single-node CPU systems with caching perform well only for small data sets; dis...
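The MF-plus-SGD setup the abstract discusses can be sketched minimally as follows. This is a single-threaded toy, not cuMF_SGD's GPU implementation; the function name and hyperparameters are illustrative.

```python
import numpy as np

def sgd_mf(ratings, n_users, n_items, rank=8, lr=0.05, reg=0.02,
           epochs=500, seed=0):
    """SGD matrix factorization on (user, item, rating) triples.

    Learns P (users x rank) and Q (items x rank) so that
    P[u] @ Q[i] approximates rating r. A minimal sketch of the
    MF+SGD formulation; cuMF_SGD is a GPU-optimized system.
    """
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, rank))
    Q = 0.1 * rng.standard_normal((n_items, rank))
    for _ in range(epochs):
        for u, i, r in ratings:
            e = r - P[u] @ Q[i]          # prediction error
            pu = P[u].copy()             # keep old P[u] for Q's update
            P[u] += lr * (e * Q[i] - reg * P[u])
            Q[i] += lr * (e * pu - reg * Q[i])
    return P, Q

# Tiny hypothetical rating triples (user, item, rating):
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0), (2, 1, 2.0)]
P, Q = sgd_mf(ratings, n_users=3, n_items=3)
```

The inner loop reads and writes only two rank-length rows per rating, which is why SGD-based MF is memory-bound rather than compute-bound, the observation motivating the paper's design.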
Journal
Journal title: SSRN Electronic Journal
Year: 2017
ISSN: 1556-5068
DOI: 10.2139/ssrn.3046053